Similar resources
Online Learning: Fundamental Algorithms
Learning theory is arguably the least practical part of machine learning, concerned mostly with theoretical guarantees for learning concepts under different conditions and scenarios. These guarantees are usually expressed as the probabilistic concentration of some measure around an optimal value that is unknown and needs to be discovered. These bounds are functions of problem-specific...
Full text
Online Learning Algorithms
In this paper, we study an online learning algorithm in Reproducing Kernel Hilbert Spaces (RKHS) and general Hilbert spaces. We present a general form of the stochastic gradient method to minimize a quadratic potential function by an independent identically distributed (i.i.d.) sample sequence, and show a probabilistic upper bound for its convergence.
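The scheme this abstract describes can be illustrated with a minimal linear sketch (an assumption for illustration, not the paper's exact RKHS construction): a stochastic gradient step on the quadratic potential for each i.i.d. sample, with a decaying step size.

```python
import numpy as np

# Minimal linear sketch of an online stochastic gradient method that
# minimizes the quadratic potential E[(<w, x> - y)^2] / 2 from an
# i.i.d. sample stream. The target w_star and step schedule are
# assumptions made for this illustration.
rng = np.random.default_rng(0)
d = 5
w_star = rng.normal(size=d)        # hypothetical target parameter
w = np.zeros(d)

for t in range(1, 5001):
    x = rng.normal(size=d)
    y = w_star @ x + 0.1 * rng.normal()  # noisy i.i.d. observation
    eta = 0.1 / np.sqrt(t)               # decaying step size (assumption)
    w -= eta * (w @ x - y) * x           # stochastic gradient step

err0 = np.linalg.norm(w_star)            # error of the zero initialization
err = np.linalg.norm(w - w_star)
```

With a decaying step size the iterates concentrate around the minimizer, which is the kind of probabilistic convergence guarantee the abstract refers to.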
Full text
Online Gradient Descent Learning Algorithms
This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without explicit regularization. We present a novel capacity independent approach to derive error bounds and convergence results for this algorithm. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately c...
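A sketch of the unregularized kernel update the abstract describes (the Gaussian kernel, bandwidth, target function, and step schedule below are illustrative assumptions): each example appends one kernel-expansion term with coefficient proportional to the negative residual, with no regularization-induced shrinkage of past coefficients.

```python
import numpy as np

# Sketch of online gradient descent for least squares in an RKHS,
# without an explicit regularization term. The hypothesis is kept as a
# kernel expansion f_t(.) = sum_i alphas[i] * K(centers[i], .).
def gauss_k(x, z, sigma=0.5):
    """Gaussian kernel (sigma is an assumed bandwidth)."""
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(1)
centers, alphas = [], []

def f(x):
    return sum(a * gauss_k(c, x) for a, c in zip(alphas, centers))

for t in range(1, 301):
    x = rng.uniform(-1, 1, size=1)
    y = np.sin(3 * x[0]) + 0.05 * rng.normal()  # assumed target function
    eta = 0.5 / np.sqrt(t)                      # decaying step size (assumption)
    residual = f(x) - y
    # gradient step: add a new expansion term; no shrinkage of old terms
    centers.append(x)
    alphas.append(-eta * residual)

grid = np.linspace(-1, 1, 50)
mse = np.mean([(f(np.array([g])) - np.sin(3 * g)) ** 2 for g in grid])
```

As the abstract notes, choosing the step sizes appropriately is what controls the error here, since there is no regularization term to do so.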
Full text
Online Pairwise Learning Algorithms with Kernels
Pairwise learning usually refers to a learning task which involves a loss function depending on pairs of examples, among which most notable ones include ranking, metric learning and AUC maximization. In this paper, we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS), which we refer to as th...
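A minimal linear sketch of online pairwise learning with a least-square loss (the linear model, the pairing of each example with only its predecessor, and the step schedule are simplifying assumptions; the paper works in an unconstrained RKHS): each step takes a gradient on a loss that depends on a pair of examples rather than a single one.

```python
import numpy as np

# Sketch: online SGD on the pairwise squared loss
#   ((<w, x_t - x_prev>) - (y_t - y_prev))^2 / 2,
# pairing each new example with the previous one for simplicity.
rng = np.random.default_rng(2)
d = 4
w_star = rng.normal(size=d)    # hypothetical target
w = np.zeros(d)
prev = None

for t in range(1, 2001):
    x = rng.normal(size=d)
    y = w_star @ x
    if prev is not None:
        xp, yp = prev
        diff, dy = x - xp, y - yp
        grad = (w @ diff - dy) * diff    # gradient of the pairwise loss
        w -= (0.1 / np.sqrt(t)) * grad
    prev = (x, y)

err0 = np.linalg.norm(w_star)
err = np.linalg.norm(w - w_star)
```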
Full text
SOLAR: Scalable Online Learning Algorithms for Ranking
Traditional learning-to-rank methods learn ranking models from training data in a batch, offline mode, which suffers from some critical limitations, e.g., poor scalability, as the model must be retrained from scratch whenever new training data arrives. This is clearly not scalable for many real applications, where training data often arrives sequentially and frequently. T...
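The sequential setting this abstract motivates can be sketched (this is an illustrative pairwise hinge-loss update, not the SOLAR algorithm itself): instead of retraining in batch, the ranker updates once per arriving document pair.

```python
import numpy as np

# Sketch of an online learning-to-rank update: each arriving pair of
# (more relevant, less relevant) feature vectors triggers a stochastic
# gradient step on the pairwise hinge loss max(0, 1 - <w, x_pos - x_neg>).
# The "true" ranking direction w_star is an assumption used to label pairs.
rng = np.random.default_rng(3)
d = 6
w_star = rng.normal(size=d)
w = np.zeros(d)

for t in range(1, 1001):
    x_pos, x_neg = rng.normal(size=(2, d))
    if w_star @ x_pos < w_star @ x_neg:   # order the pair by relevance
        x_pos, x_neg = x_neg, x_pos
    diff = x_pos - x_neg
    if w @ diff < 1.0:                    # margin violated: update
        w += 0.1 * diff                   # fixed step size (assumption)

# fraction of fresh pairs ordered consistently with w_star
test_pairs = rng.normal(size=(200, 2, d))
acc = np.mean([(w @ (a - b) > 0) == (w_star @ (a - b) > 0)
               for a, b in test_pairs])
```

Each update costs O(d), independent of how much data has already been seen, which is the scalability property batch retraining lacks.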
Full text
Journal
Journal title: Foundations of Computational Mathematics
Year: 2005
ISSN: 1615-3375,1615-3383
DOI: 10.1007/s10208-004-0160-z